Restoration and Enhancement of Solo Guitar Recordings Based on Sound Source Modeling*
Abstract
Audio enhancement is a broad concept and is closely related to audio restoration. Intuitively, audio enhancement refers to any processing that improves the perceptual quality of an audio signal. The goal of the digital audio restoration field [1] is, ideally, to improve the quality of audio signals extracted from old recordings, such as wax cylinders, 78 rpm and long-playing records, magnetic tapes, and even digital media. The usual approach consists of finding the best way to capture and transfer the recorded sound from the original matrices to a digital medium and, after that, applying digital signal-processing techniques to remove any disturbance or noise produced by the recording and reproducing system.

The most common tasks of audio restoration algorithms are the removal of impulsive noise and the reduction of broad-band noise in the degraded audio sources. Whereas localized disturbances, at least those of short duration, are relatively easy to treat, dealing with global types of degradation is still a challenging task. In particular, in the broad-band noise-reduction problem the goal is to find better tradeoffs between effective noise reduction and signal distortion [1], [2]. Although the perceptual quality of the restored signals plays an important role in this matter, only recently have psychoacoustic criteria been proposed for audio enhancement purposes [3], [4], still constrained by the lack of an observable clean reference signal.

Audio restoration algorithms usually employ signal modeling techniques, which deal with the information available in the surface presentation of audio signals, that is, they attempt to model the waveform representation of the audio signal. In sound source modeling (SSM) techniques, however, the goal is to model the phenomenon that generated the waveform. As a natural consequence, a structured audio representation [5] is required in SSM. In addition to SSM, models for the propagation medium and the receptor characteristics have been increasingly employed in audio signal processing. In [6] a general framework for audio and musical signal processing is described, showing the hierarchical scales and relationships among several levels of audio representation. In fact, current challenging audio signal-processing applications seem to be moving toward the incorporation of higher representation levels of audio signals, such as object- and content-based ones. Among those applications it is possible to cite sound source recognition [7], sound source separation [8], music retrieval, automatic transcription of music [9], object-based sound source modeling [10], and sound synthesis [11].

Due to the requirement of a structured audio representation, the practical use of SSM for the analysis and synthesis of audio signals is still limited to specific cases. For general cases, such as the analysis and synthesis of polyphonic music, an SSM-based system faces difficult tasks. The analysis part requires ...
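To make the broad-band noise-reduction tradeoff mentioned above more concrete, the following sketch shows a bare-bones spectral-subtraction noise reducer in Python. It is purely illustrative and is not the method proposed in this work: the noise-only segment length, over-subtraction factor, and spectral floor are assumed parameters, and practical restoration systems use considerably more refined estimators.

```python
# Minimal spectral-subtraction sketch (assumed parameters, not the paper's
# algorithm): estimate a noise magnitude profile from a noise-only segment
# and subtract it from every short-time spectrum.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(x, fs, noise_seconds=0.5, over_sub=2.0, floor=0.05,
                      nperseg=1024):
    """Reduce broad-band noise in a mono float signal x sampled at fs Hz."""
    _, _, X = stft(x, fs, nperseg=nperseg)
    mag, phase = np.abs(X), np.angle(X)

    # Noise profile from the first `noise_seconds` of the recording,
    # assumed (hypothetically) to contain no musical signal.
    hop = nperseg // 2
    n_frames = max(1, int(noise_seconds * fs / hop))
    noise_mag = mag[:, :n_frames].mean(axis=1, keepdims=True)

    # Over-subtraction with a spectral floor to limit "musical noise".
    clean_mag = np.maximum(mag - over_sub * noise_mag, floor * mag)

    _, y = istft(clean_mag * np.exp(1j * phase), fs, nperseg=nperseg)
    return y[:len(x)]
```

Raising over_sub removes more hiss but also more of the signal, which is exactly the noise-reduction versus signal-distortion tradeoff referred to above.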
Similar articles
Audio Restoration Using Sound Source Modeling
This paper presents new propositions for audio restoration and enhancement based on Sound Source Modeling (SSM). The main motivation is to take advantage of prior information from generative models of sound sources when restoring or enhancing musical signals. We describe a case based on the commuted waveguide synthesis algorithm for plucked string tones and devise a scheme to extend the bandwidth ...
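As a rough illustration of the kind of plucked-string source model referred to in this abstract, the sketch below implements a basic Karplus-Strong string, a much simplified relative of commuted waveguide synthesis; the excitation, delay-line length, and decay coefficient are assumed values, and the cited work additionally models the body response and excitation signal.

```python
# Karplus-Strong pluck: a noise burst circulating in a delay line with a
# lossy two-point averaging loop filter (illustrative sketch only).
import numpy as np

def pluck(f0=220.0, fs=44100, seconds=2.0, decay=0.996):
    n = int(fs / f0)                        # delay-line length of ~one period
    buf = np.random.uniform(-1.0, 1.0, n)   # noise burst as the excitation
    out = np.empty(int(fs * seconds))
    for i in range(len(out)):
        out[i] = buf[i % n]
        # Averaging with a loss factor gives the exponentially decaying,
        # gently low-pass-filtered harmonics typical of a plucked string.
        buf[i % n] = decay * 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out / np.max(np.abs(out))
```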
Jazz Ensemble Expressive Performance Modeling
Computational expressive music performance studies the analysis and characterisation of the deviations that a musician introduces when performing a musical piece. It has been studied in a classical context where timing and dynamic deviations are modeled using machine learning techniques. In jazz music, work has been done previously on the study of ornament prediction in guitar performance, as w...
Automatic transcription of bass guitar tracks applied for music genre classification and sound synthesis
Music recordings most often consist of multiple instrument signals, which overlap in time and frequency. In the field of Music Information Retrieval (MIR), existing algorithms for the automatic transcription and analysis of music recordings aim to extract semantic information from mixed audio signals. In recent years, it has frequently been observed that algorithm performance is limited due to ...
Electric Guitar Playing Technique Detection in Real-World Recording Based on F0 Sequence Pattern Recognition
For a complete transcription of a guitar performance, the detection of playing techniques such as bend and vibrato is important, because playing techniques suggest how the melody is interpreted through the manipulation of the guitar strings. While existing work has mostly focused on playing-technique detection for individual single notes, this paper attempts to expand this endeavor to recordings of ...
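For illustration only, the sketch below extracts an F0 sequence with librosa's pYIN implementation and flags vibrato as a dominant 4-8 Hz modulation of the pitch contour; the thresholds, band limits, and hop-size assumption are hypothetical and do not reproduce the pattern-recognition stage described in the cited work.

```python
# Naive F0-sequence vibrato detector (assumed thresholds, sketch only).
import numpy as np
import librosa

def detect_vibrato(path, fmin=80.0, fmax=1000.0):
    y, sr = librosa.load(path, sr=None, mono=True)
    f0, voiced, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    frame_rate = sr / 512            # pyin's default hop length is 512 samples

    f0 = f0[voiced & ~np.isnan(f0)]  # keep voiced frames only (gaps ignored)
    if len(f0) < int(frame_rate):    # need roughly one second of voiced F0
        return False
    cents = 1200.0 * np.log2(f0 / np.median(f0))

    # Flag vibrato when the pitch contour shows a dominant 4-8 Hz modulation.
    spec = np.abs(np.fft.rfft(cents - cents.mean()))
    freqs = np.fft.rfftfreq(len(cents), d=1.0 / frame_rate)
    band = (freqs >= 4.0) & (freqs <= 8.0)
    return bool(spec[band].max() > 3.0 * spec[~band][1:].mean())
```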
Instrument Recognition Beyond Separate Notes - Indexing Continuous Recordings
Some initial works have appeared that began to deal with the complicated task of musical instrument recognition in multi-instrumental music. Although quite a few papers have already appeared on instrument recognition of single-instrument musical phrases ("solos"), the work on solo recognition is not yet exhausted. The knowledge of how to deal well with solos can also help in recognition of multi ...
Journal title:
Volume / Issue:
Pages: -
Publication date: 2002